
    Cached Geometry Manager for View-dependent LOD Rendering

    The new generation of commodity graphics cards with significant on-board video memory has become widely popular and provides high-performance rendering and flexibility. One of the features to be exploited with this hardware is the use of the on-board video memory to store geometry information. This strategy significantly reduces the data transfer overhead of sending geometry data over the AGP bus interface from main memory to the graphics card. However, taking advantage of cached geometry is not a trivial task because the data models often exceed the memory size of the graphics card. In this paper we present a dynamic Cached Geometry Manager (CGM) to address this issue. We show how this technique improves the performance of real-time view-dependent level-of-detail (LOD) selection and rendering algorithms for large data sets. Alternative caching approaches have been analyzed over two different view-dependent progressive mesh (VDPM) frameworks: one for rendering of arbitrary manifold 3D meshes, and one for terrain visualization.
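A cache that evicts the least recently used geometry when the video-memory budget is exceeded is one plausible reading of such a manager. The sketch below is a minimal, hypothetical version; the class name, method names, and byte-budget interface are assumptions for illustration, not the paper's API:

```python
from collections import OrderedDict

class CachedGeometryManager:
    """LRU sketch of a cached geometry manager: geometry chunks live in a
    fixed video-memory budget; the least recently used chunk is evicted
    whenever a new chunk does not fit."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.cache = OrderedDict()  # chunk_id -> size_bytes

    def request(self, chunk_id, size_bytes):
        """Return True on a cache hit; False if the chunk had to be uploaded."""
        if chunk_id in self.cache:
            self.cache.move_to_end(chunk_id)  # mark as recently used
            return True
        # Evict least recently used chunks until the new one fits.
        while self.used + size_bytes > self.capacity and self.cache:
            _, evicted_size = self.cache.popitem(last=False)
            self.used -= evicted_size
        self.cache[chunk_id] = size_bytes
        self.used += size_bytes
        return False
```

A real manager would additionally drive the eviction order by the LOD selection (e.g. keep chunks visible in the current view), but the budget bookkeeping above is the core mechanism.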

    Dictionary Learning-based Inpainting on Triangular Meshes

    The problem of inpainting consists of filling in missing or damaged regions of images and videos in such a way that the filling pattern does not produce artifacts that deviate from the original data. Besides restoring missing data, inpainting can also be used to remove undesired objects. In this work, we address the problem of inpainting on surfaces through a new method based on dictionary learning and sparse coding. Our method learns the dictionary by subdividing the mesh into patches and rebuilds the mesh via a reconstruction method, inspired by Non-local Means, that operates on the computed sparse codes. One advantage of our method is that it fills the missing regions while simultaneously removing noise and enhancing important features of the mesh. Moreover, the inpainting result is globally coherent, as the dictionary-based representation captures all the geometric information in the transformed domain. We present two variations of the method: a direct one, in which the model is reconstructed and restored directly from the representation in the transformed domain, and an adaptive one, in which the missing regions are recreated iteratively through successive propagation of the sparse codes computed at the hole boundaries, which guide the local reconstructions. The adaptive method produces better results for large regions because the sparse codes of the patches are adapted to the sparse codes of the boundary patches. Finally, we present and analyze experimental results that demonstrate the performance of our method compared to methods from the literature.
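As an illustration of the sparse-coding step, the sketch below fills the missing entries of a single patch given a known dictionary, using Orthogonal Matching Pursuit on the known entries only. This is a generic stand-in for the idea, not the paper's method; the function names and the masking scheme are assumptions:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal Matching Pursuit: greedy sparse code of signal x over a
    dictionary D whose columns are atoms."""
    residual = x.copy()
    support = []
    code = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # least-squares refit on all selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code[support] = coef
    return code

def inpaint_patch(D, patch, mask, n_nonzero=2):
    """Estimate the sparse code from the known entries only (mask == True),
    then reconstruct the whole patch, filling the masked-out region."""
    code = omp(D[mask], patch[mask], n_nonzero)
    return D @ code
```

Because the code is estimated from the known rows of the dictionary but applied to all rows, the reconstruction extrapolates plausible values into the hole; this is the mechanism that lets sparse codes propagate information from the hole boundary inward.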

    SenVis: Interactive Tensor-based Sensitivity Visualization

    Sobol's method is one of the most powerful and widely used frameworks for global sensitivity analysis, and it maps every possible combination of input variables to an associated Sobol index. However, these indices are often challenging to analyze in depth, due in part to the lack of suitable, sufficiently flexible, and fast-to-query data access structures, as well as visualization techniques. We propose a visualization tool that leverages tensor decomposition, a compressed data format that can quickly and approximately answer sophisticated queries over exponentially large sets of Sobol indices. This way, we are able to capture the complete global sensitivity information of high-dimensional scalar models. Our application is based on a three-stage visualization, to which variables to be analyzed can be added or removed interactively. It includes a novel hourglass-like diagram presenting the relative importance of any single variable or combination of input variables with respect to any composition of the rest of the input variables. We showcase our visualization with a range of example models, whereby we demonstrate the high expressive power and analytical capability made possible by the proposed method.
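For reference, first-order Sobol indices can be estimated with a standard pick-freeze Monte Carlo scheme (a Saltelli-style estimator). The sketch below computes them directly, without the tensor compression that is the paper's contribution, and is meant only to make the quantity being visualized concrete:

```python
import numpy as np

def first_order_sobol(f, dim, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.
    f maps an (n, dim) array of inputs in [0, 1] to n scalar outputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]      # replace only column i with B's column
        # f(B) and f(ABi) share exactly the input x_i, so their covariance
        # estimates Var(E[f | x_i]); normalizing by Var(f) gives S_i.
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S
```

For the linear model f(x) = 2·x0 + x1 on the unit square, the exact indices are S0 = 0.8 and S1 = 0.2, which the estimator recovers to within Monte Carlo error.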

    Walk2Map: Extracting Floor Plans from Indoor Walk Trajectories

    Recent years have seen a proliferation of new digital products for the efficient management of indoor spaces, with important applications like emergency management, virtual property showcasing, and interior design. While highly innovative and effective, these products rely on accurate 3D models of the environments considered, including information on both architectural and non-permanent elements. These models must be created from measured data such as RGB-D images or 3D point clouds, whose capture and consolidation involves lengthy data workflows. This strongly limits the rate at which 3D models can be produced, preventing the adoption of many digital services for indoor space management. We provide a radical alternative to such data-intensive procedures by presenting Walk2Map, a data-driven approach that generates floor plans solely from the trajectories of a person walking inside the rooms. Thanks to recent advances in data-driven inertial odometry, such minimalistic input data can be acquired from the IMU readings of consumer-level smartphones, which allows for an effortless and scalable mapping of real-world indoor spaces. Our work is based on learning the latent relation between an indoor walk trajectory and the information represented in a floor plan: interior space footprint, portals, and furniture. We distinguish between recovering area-related (interior footprint, furniture) and wall-related (doors) information and use two different neural architectures for the two tasks: an image-based Encoder-Decoder and a Graph Convolutional Network, respectively. We train our networks using scanned 3D indoor models and apply them in a cascaded fashion on an indoor walk trajectory at inference time. We perform a qualitative and quantitative evaluation using both trajectories simulated from scanned models of interiors and measured, real-world trajectories, and compare against a baseline method for image-to-image translation. The experiments confirm that our technique is viable and allows recovering reliable floor plans from minimal walk trajectory data.
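An image-based network needs the trajectory in raster form. One simple way to turn a 2D walk into an occupancy image is sketched below; the grid size and cell length are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def rasterize_trajectory(points, grid_size=64, cell=0.1):
    """Rasterize a 2D walk trajectory (an (n, 2) array in metres) into a
    binary occupancy image of shape (grid_size, grid_size), the kind of
    input an image-based encoder-decoder could consume."""
    img = np.zeros((grid_size, grid_size), dtype=np.float32)
    origin = points.min(axis=0)            # shift trajectory into the grid
    ij = ((points - origin) / cell).astype(int)
    ij = np.clip(ij, 0, grid_size - 1)     # guard against out-of-range cells
    img[ij[:, 1], ij[:, 0]] = 1.0          # mark visited cells (row = y)
    return img
```

In practice one would also rasterize interpolated segments between consecutive samples so the walked path is connected, but the cell-marking step is the essential conversion.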

    Tensor Approximation for Multidimensional and Multivariate Data

    Tensor decomposition methods and multilinear algebra are powerful tools for coping with challenges around multidimensional and multivariate data in computer graphics, image processing, and data visualization, in particular with respect to compact representation and processing of increasingly large-scale data sets. Initially proposed as an extension of the concept of matrix rank to three and more dimensions, tensor decomposition methods have found applications in a remarkably wide range of disciplines. We briefly review the main concepts of tensor decompositions and their application to multidimensional visual data. Furthermore, we include a first outlook on porting these techniques to multivariate data such as vector and tensor fields.
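As a concrete example of one such decomposition, the sketch below computes a truncated higher-order SVD (a common Tucker-style compression) of a 3-way tensor with plain NumPy. It is an illustration of the general technique, not code from the survey:

```python
import numpy as np

def hosvd3(T, ranks):
    """Truncated higher-order SVD of a 3-way tensor: one orthonormal
    factor matrix per mode plus a small core tensor (Tucker format)."""
    Us = []
    for mode, r in enumerate(ranks):
        # unfold T along `mode` and keep the top-r left singular vectors
        unf = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unf, full_matrices=False)
        Us.append(U[:, :r])
    # project T onto the factor bases to get the core, then expand back
    core = np.einsum('ijk,ia,jb,kc->abc', T, *Us)
    recon = np.einsum('abc,ia,jb,kc->ijk', core, *Us)
    return core, Us, recon
```

The compression comes from storing only the small core and the three thin factor matrices instead of the full tensor; for a rank-1 tensor the rank-(1,1,1) reconstruction is exact.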

    Leveraging Different Visual Designs for Communication of Severe Weather Events and their Uncertainty

    In this work, we present several interactive visual designs for mobile visualization of severe weather events, for the communication of weather hazards, their risks, uncertainty, and recommended actions. Our approach is based on previous work in uncertainty visualization [5], cognitive science [6], and decision sciences for risk management [3, 4]. We propose six configurations that vary the ratio of text vs. graphics used in the visual display, and the interaction workflow needed for a non-expert user to make an informed decision and take effective action. Our goal is to test how efficient these configurations are and to what degree they are suitable for communicating weather hazards, associated uncertainty, risk, and recommended actions to non-experts. Future steps include two cycles of evaluation: a first pilot to rapidly test the prototype with a small number of participants, collect actionable insights, and incorporate potential improvements, followed by a second, crowd-sourced user study in which we will evaluate the visualization prototypes extensively.

    DanceMoves: A Visual Analytics Tool for Dance Movement Analysis

    Analyzing body movement as a means of expression is of interest in diverse areas such as dance, sports, and film, as well as anthropology and archaeology. In choreography in particular, body movements are at the core of artistic expression. Dance moves are composed of spatial and temporal structures that are difficult to address without interactive visual data analysis tools. We present a visual analytics solution that allows the user to get an overview of, compare, and visually search dance move features in video archives. With the help of similarity measures, a user can compare dance moves and assess dance poses. We illustrate our approach through three use cases and an analysis of the performance of our similarity measures. The expert feedback and the experimental results show that 75% to 80% of dance moves can be categorized correctly. Domain experts recognize great potential in this standardized analysis. Comparative and motion analysis allows them to gain detailed insights into the temporal and spatial development of motion patterns and poses.
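Comparing moves that unfold at different speeds calls for a time-alignment measure; dynamic time warping is one standard choice for pose sequences, shown here generically and not as the tool's actual similarity measure:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two pose sequences, each an
    (n_frames, n_features) array. Frames may repeat along the alignment,
    so sequences performed at different speeds still match closely."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # extend the cheapest of: skip a frame in a, in b, or advance both
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A distance of zero means one sequence is a time-warped copy of the other, which is the invariance one wants when the same move is danced faster or slower.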

    Compressed Progressive Meshes

    Most systems that support visual interaction with 3D models use shape representations based on triangle meshes. The size of these representations imposes limits on applications where complex 3D models must be accessed remotely. Techniques for simplifying and compressing 3D models reduce the transmission time. Multi-resolution formats provide quick access to a crude model and then refine it progressively. Unfortunately, compared to the best non-progressive compression methods, previously proposed progressive refinement techniques impose a significant overhead when the full-resolution model must be downloaded. The CPM (Compressed Progressive Meshes) approach proposed here eliminates this overhead. It uses a new "patching" technique, which refines the topology of the mesh in batches, each of which increases the number of vertices by up to 50%. Less than 4 bits per triangle encode where and how the topological refinements should be applied. We estimate the position of new vertices from the positions of their topological neighbors in the less refined mesh, using a new estimator that leads to representations of vertex coordinates that are 50% more compact than previously reported progressive geometry compression techniques.
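The neighbor-based estimation can be illustrated with the simplest possible predictor, the neighbor average, with the prediction residual quantized for compact coding. Both the averaging predictor and the quantization step below are assumptions for illustration, not CPM's exact estimator:

```python
import numpy as np

def predict_vertex(neighbor_positions):
    """Predict a refined vertex as the average of its topological
    neighbors in the coarser mesh; only the (small) residual between
    prediction and true position then needs to be encoded."""
    return np.mean(neighbor_positions, axis=0)

def encode_residual(true_pos, neighbor_positions, quant=1e-3):
    """Quantize the prediction residual to integer multiples of `quant`
    (a hypothetical step size) so it can be entropy-coded compactly."""
    residual = true_pos - predict_vertex(neighbor_positions)
    return np.round(residual / quant).astype(int)

def decode_vertex(code, neighbor_positions, quant=1e-3):
    """Recover the vertex from the integer code and the same neighbors."""
    return predict_vertex(neighbor_positions) + code * quant
```

The better the predictor, the smaller the residuals, and hence the fewer bits the quantized coordinates need; this is the source of the compactness gain the abstract reports.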

    Campus Explorer: Facilitating Student Communities through Gaming

    University students are often highly focused on their current lectures and imminent exams and thus neglect to interact with students across departments and to engage in campus life. To facilitate a more closely-knit community of university students, we evaluate a set of suitable core game mechanics, social features, and reward systems to motivate students to explore their university and to meet other students. Our prototype mobile application implements a location-based approach and includes game mechanics such as building check-ins, meeting other students, campus expeditions, and campus events. We evaluate the potential of our approach using both qualitative and quantitative data collected during an initial playtesting phase. Our analysis shows that our location-based mechanics and focus on social features were well received by students. Players engaged in exploring the campus and saw potential in location-sharing and future collaboration features.